A New Hybrid Algorithm for Convex Nonlinear Unconstrained Optimization
Authors
Abstract
Similar Articles
A new hybrid conjugate gradient algorithm for unconstrained optimization
In this paper, a new hybrid conjugate gradient algorithm is proposed for solving unconstrained optimization problems. The new method generates sufficient descent directions independently of any line search. Moreover, global convergence of the proposed method is proved under the Wolfe line search. Numerical experiments are also presented to show the efficiency of the proposed algorithm, espe...
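For readers who want a concrete picture, the following is a minimal, illustrative sketch of a generic hybrid conjugate gradient iteration with a weak Wolfe line search, written in Python with NumPy. The hybrid parameter shown, beta = max(0, min(beta_FR, beta_PR)), is the classical Hu-Storey choice rather than the rule proposed in the paper above, and the function names, tolerances, and test problem are assumptions for the example.

```python
import numpy as np

def weak_wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.4, alpha=1.0, max_iter=50):
    """Bisection search for a step length satisfying the weak Wolfe conditions."""
    lo, hi = 0.0, np.inf
    fx, slope = f(x), grad(x) @ d          # slope < 0 for a descent direction d
    for _ in range(max_iter):
        x_new = x + alpha * d
        if f(x_new) > fx + c1 * alpha * slope:      # Armijo (sufficient decrease) fails
            hi = alpha
            alpha = 0.5 * (lo + hi)
        elif grad(x_new) @ d < c2 * slope:          # curvature condition fails
            lo = alpha
            alpha = 2.0 * alpha if np.isinf(hi) else 0.5 * (lo + hi)
        else:
            return alpha
    return alpha

def hybrid_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic hybrid conjugate gradient method (illustrative; not the paper's scheme)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = weak_wolfe_line_search(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Hu-Storey hybrid: clip the Polak-Ribiere parameter by Fletcher-Reeves and at zero.
        beta_fr = (g_new @ g_new) / (g @ g)
        beta_pr = (g_new @ (g_new - g)) / (g @ g)
        beta = max(0.0, min(beta_fr, beta_pr))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage: minimize an ill-conditioned convex quadratic.
A = np.diag([1.0, 10.0, 100.0])
x_star = hybrid_cg(lambda x: 0.5 * x @ A @ x, lambda x: A @ x, np.ones(3))
print(x_star)
```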
A New Hybrid Conjugate Gradient Method Based on Eigenvalue Analysis for Unconstrained Optimization Problems
In this paper, two extended three-term conjugate gradient methods based on the Liu-Storey (LS) conjugate gradient method are presented to solve unconstrained optimization problems. A remarkable property of the proposed methods is that, by eigenvalue analysis, the search direction always satisfies the sufficient descent condition independently of the line search. The globa...
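As context for the abstract above, here is a hedged sketch of the objects it refers to: the general three-term direction update built on the LS parameter, and the sufficient descent condition. The extension coefficients of the proposed methods are not reproduced here.

```latex
% General three-term conjugate gradient direction (the extension coefficients
% \beta_k and \theta_k are method-specific):
\[
  d_{k+1} = -g_{k+1} + \beta_k\, d_k + \theta_k\, y_k,
  \qquad y_k = g_{k+1} - g_k,
  \qquad \beta_k^{LS} = \frac{g_{k+1}^{\top} y_k}{-g_k^{\top} d_k}.
\]
% Sufficient descent condition, required to hold independently of the line search:
\[
  g_k^{\top} d_k \le -c\,\lVert g_k \rVert^{2}
  \quad \text{for some constant } c > 0 \text{ and all } k.
\]
```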
A Hybrid Algorithm for Convex Semidefinite Optimization
We present a hybrid algorithm for optimizing a convex, smooth function over the cone of positive semidefinite matrices. Our algorithm converges to the global optimal solution and can be used to solve general large-scale semidefinite programs, and hence can be readily applied to a variety of machine learning problems. We show experimental results on three machine learning problems. Our approach ou...
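As a small, hedged illustration of working over the positive semidefinite cone (and not of the hybrid algorithm described above), the sketch below performs projected gradient steps in Python, projecting onto the PSD cone by eigenvalue thresholding; the problem instance and step size are assumptions for the example.

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the positive semidefinite cone
    by zeroing out its negative eigenvalues."""
    S = 0.5 * (M + M.T)                      # symmetrize to guard against round-off
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0.0, None)) @ V.T

def projected_gradient(grad_f, X0, step, iters=200):
    """Projected gradient descent over the PSD cone (illustrative; not the hybrid method above)."""
    X = project_psd(X0)
    for _ in range(iters):
        X = project_psd(X - step * grad_f(X))
    return X

# Usage: nearest PSD matrix to a symmetric C, i.e. minimize f(X) = 0.5 * ||X - C||_F^2
# over X >= 0; the gradient is X - C and its Lipschitz constant is 1, so step = 1.0.
C = np.array([[2.0, -1.0], [-1.0, -3.0]])
print(projected_gradient(lambda X: X - C, np.zeros_like(C), step=1.0))
```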
A nonmonotone approximate sequence algorithm for unconstrained nonlinear optimization
A new nonmonotone algorithm is proposed and analyzed for unconstrained nonlinear optimization. The nonmonotone techniques applied in this algorithm are based on the estimate sequence proposed by Nesterov (Introductory Lectures on Convex Optimization: A Basic Course, 2004) for convex optimization. Under proper assumptions, global convergence of this algorithm is established for minimizing genera...
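For reference, the smooth, monotone accelerated gradient scheme whose classical convergence analysis is built on Nesterov's estimate sequences is sketched below; it is included only as background and is not the nonmonotone algorithm of the paper. The problem instance and parameter names are illustrative.

```python
import numpy as np

def nesterov_accelerated_gradient(grad_f, x0, L, iters=500):
    """Accelerated gradient method for a convex f with L-Lipschitz gradient;
    its classical O(1/k^2) analysis uses Nesterov's estimate sequences.
    Background sketch only -- not the nonmonotone algorithm of the paper."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        x_new = y - grad_f(y) / L                         # gradient step at the extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))  # momentum schedule
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)     # extrapolation
        x, t = x_new, t_new
    return x

# Usage: an ill-conditioned convex quadratic with L = 100.
A = np.diag([1.0, 100.0])
print(nesterov_accelerated_gradient(lambda x: A @ x, np.array([1.0, 1.0]), L=100.0))
```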
An Algorithm for Unconstrained Quadratically Penalized Convex Optimization
A descent algorithm, "Quasi-Quadratic Minimization with Memory" (QQMM), is proposed for unconstrained minimization of the sum, F, of a non-negative convex function, V, and a quadratic form. Such problems come up in regularized estimation in machine learning and statistics. In addition to values of F, QQMM requires the (sub)gradient of V. Two features of QQMM help keep low the number of eval...
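To make the problem class concrete, the sketch below builds an objective of the form F(w) = V(w) + (1/2) w^T Q w with a hinge-type V (an assumed choice), supplies a subgradient as the abstract says QQMM requires, and minimizes it with a plain subgradient method as a baseline. QQMM itself, including its use of memory, is not reproduced here; all names and data are illustrative.

```python
import numpy as np

# Illustrative objective of the kind described above:
#   F(w) = V(w) + 0.5 * w^T Q w,  with V nonnegative, convex, possibly nonsmooth.
# Here V is a hinge-type loss (an assumed choice for the example).

def make_problem(X, y, Q):
    def F(w):
        return np.maximum(0.0, 1.0 - y * (X @ w)).sum() + 0.5 * w @ Q @ w
    def subgrad_F(w):
        active = (y * (X @ w)) < 1.0                     # samples where the hinge is active
        return -(X[active] * y[active][:, None]).sum(axis=0) + Q @ w
    return F, subgrad_F

def subgradient_method(F, subgrad_F, w0, iters=2000):
    """Plain subgradient method with a diminishing step -- a baseline, not QQMM."""
    w = np.asarray(w0, dtype=float)
    best_w, best_F = w.copy(), F(w)
    for k in range(1, iters + 1):
        w = w - (1.0 / k) * subgrad_F(w)
        if F(w) < best_F:
            best_w, best_F = w.copy(), F(w)
    return best_w

# Usage on synthetic data with a quadratic (ridge-type) penalty Q = I.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50))
F, subgrad_F = make_problem(X, y, Q=np.eye(3))
print(subgradient_method(F, subgrad_F, np.zeros(3)))
```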
Journal
Journal title: Journal of Applied Mathematics
Year: 2019
ISSN: 1110-757X,1687-0042
DOI: 10.1155/2019/8728196